1.
Transl Vis Sci Technol; 12(11): 18, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37962538

ABSTRACT

Purpose: To objectively quantify near-work gaze behaviors and the visual environment during reading tasks performed on a smartphone and on paper, in both indoor and outdoor environments, in myopes and emmetropes.

Methods: A novel wearable gaze and viewing distance tracking device was used to quantify near-work gaze behaviors (focusing demand) and the visual environment (20° peripheral scene relative defocus) during a series of reading tasks. Data from nine myopes (mean age, 21 ± 1.4 years) and 10 emmetropes (21 ± 0.8 years) were analyzed. Five-minute reading tasks (matched for font type and size) were performed under four conditions: reading from a smartphone indoors, paper indoors, smartphone outdoors, and paper outdoors.

Results: A significantly greater focusing demand (closer viewing distance) was found with smartphone-based reading (mean, 3.15 ± 0.74 D) compared to paper-based reading (2.67 ± 0.48 D) (P < 0.001), with the differences being greatest for myopic participants (P = 0.04). Smartphone reading was also associated with greater peripheral scene relative myopic defocus (P < 0.001). Although near-work behaviors were similar between environments, significantly more relative myopic defocus was found at the start of the paper-based task when performed outdoors compared to indoors (P = 0.02).

Conclusions: Significant differences in focusing demand and scene relative defocus within a 20° field were associated with reading tasks performed on a smartphone and on paper in indoor and outdoor environments.

Translational Relevance: These findings highlight the complex interaction between near-work behaviors and the visual environment and demonstrate that factors of potential importance to myopia development vary between paper-based and smartphone-based near tasks.
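The "focusing demand" reported above is the standard accommodative demand: the reciprocal of viewing distance in meters, expressed in diopters. A minimal sketch of that relation (the distances below are back-calculated from the abstract's mean demands for illustration, not raw study data):

```python
# Accommodative (focusing) demand in diopters: D = 1 / d,
# where d is the viewing distance in meters. Closer viewing
# distance -> higher demand.

def focusing_demand(viewing_distance_m: float) -> float:
    """Focusing demand in diopters for a given viewing distance."""
    return 1.0 / viewing_distance_m

# The reported mean smartphone demand of 3.15 D implies a viewing
# distance of roughly 0.32 m, versus roughly 0.37 m for the
# 2.67 D paper-based mean.
smartphone_distance_m = 1.0 / 3.15
paper_distance_m = 1.0 / 2.67
```

This is why the abstract describes greater focusing demand as a "closer viewing distance": the two quantities are reciprocals.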


Subjects
Fixation, Ocular , Myopia , Humans , Young Adult , Myopia/diagnosis , Myopia/epidemiology , Environment , Reading
2.
Article in English | MEDLINE | ID: mdl-31944954

ABSTRACT

Knee arthroscopy is a complex minimally invasive surgery that can cause unintended injuries to femoral cartilage, postoperative complications, or both. Autonomous robotic systems using real-time volumetric ultrasound (US) imaging guidance hold potential for significantly reducing these issues and improving patient outcomes. To enable the robotic system to navigate autonomously in the knee joint, the imaging system should provide the robot with a real-time, comprehensive map of the surgical site. To this end, the first step is automatic image quality assessment, which ensures that the boundaries of the relevant knee structures are defined well enough to be detected, outlined, and then tracked. In this article, a recently developed one-class classifier deep learning algorithm was used to discriminate between US images, acquired in a simulated surgical scenario, on which the femoral cartilage either could or could not be outlined. A total of 38,656 2-D US images were extracted from 151 3-D US volumes collected from six volunteers, and each image was labeled "1" or "0" according to whether an expert was or was not able to outline the cartilage on it. The algorithm was evaluated against the expert labels as ground truth with fivefold cross-validation, where each fold was trained and tested on average with 15,640 and 6,246 labeled images, respectively. The algorithm reached a mean accuracy of 78.4% ± 5.0, mean specificity of 72.5% ± 9.4, mean sensitivity of 82.8% ± 5.8, and mean area under the curve of 85% ± 4.4. In addition, interobserver and intraobserver tests involving two experts were performed on a subset of 1,536 2-D US images; percent agreement values of 0.89 and 0.93 were achieved between the two experts (interobserver) and by each expert (intraobserver), respectively.
These results show the feasibility of the first essential step in the development of automatic US image acquisition and interpretation systems for autonomous robotic knee arthroscopy.
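The evaluation metrics named above (accuracy, sensitivity, specificity) follow the standard confusion-matrix definitions, and percent agreement is simply the fraction of images on which two sets of labels match. A minimal sketch with hypothetical counts (the study's actual confusion matrices are not given in the abstract):

```python
# Standard binary-classification metrics as used to evaluate the
# one-class classifier. All counts below are hypothetical examples,
# not values from the study.

def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

def percent_agreement(labels_a, labels_b) -> float:
    """Fraction of images on which two label sets agree, as used
    for the inter- and intraobserver tests."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical counts for illustration:
sens, spec, acc = binary_metrics(tp=80, fp=25, tn=75, fn=20)
agreement = percent_agreement([1, 0, 1, 1], [1, 0, 0, 1])
```

In the fivefold cross-validation described above, these metrics would be computed once per held-out fold and then averaged, which is why the abstract reports each metric as a mean with a spread.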


Subjects
Arthroscopy/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Knee Joint/diagnostic imaging , Ultrasonography/methods , Adult , Algorithms , Cartilage/diagnostic imaging , Cartilage/surgery , Femur/diagnostic imaging , Femur/surgery , Humans , Knee Joint/surgery , Young Adult